Montevideo
- South America > Uruguay > Montevideo > Montevideo (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
Blur2seq: Blind Deblurring and Camera Trajectory Estimation from a Single Camera Motion-blurred Image
Carbajal, Guillermo, Almansa, Andrés, Musé, Pablo
Motion blur caused by camera shake, particularly under large or rotational movements, remains a major challenge in image restoration. We propose a deep learning framework that jointly estimates the latent sharp image and the underlying camera motion trajectory from a single blurry image. Our method leverages the Projective Motion Blur Model (PMBM), implemented efficiently using a differentiable blur creation module compatible with modern networks. A neural network predicts a full 3D rotation trajectory, which guides a model-based restoration network trained end-to-end. This modular architecture provides interpretability by revealing the camera motion that produced the blur. Moreover, this trajectory enables the reconstruction of the sequence of sharp images that generated the observed blurry image. To further refine results, we optimize the trajectory post-inference via a reblur loss, improving consistency between the blurry input and the restored output. Extensive experiments show that our method achieves state-of-the-art performance on both synthetic and real datasets, particularly in cases with severe or spatially variant blur, where end-to-end deblurring networks struggle. Code and trained models are available at https://github.com/GuillermoCarbajal/Blur2Seq/
- Europe > France (0.04)
- South America > Uruguay > Montevideo > Montevideo (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (3 more...)
- Media > Television (0.81)
- Media > Photography (0.81)
- Media > Film (0.81)
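The reblur-loss refinement described in the Blur2seq abstract can be sketched in a drastically simplified form: here each camera pose is reduced to an integer 2D translation (the paper's PMBM uses full 3D rotation trajectories and differentiable warping), and the synthesized blur is a plain average of warped frames. Function and variable names are illustrative, not the authors'.

```python
import numpy as np

def reblur(sharp, trajectory):
    """Synthesize a blurry image by averaging the sharp image warped along
    the trajectory. Each pose is simplified here to an integer (dy, dx)
    translation; the paper's PMBM uses full 3D rotations (homographies)."""
    frames = [np.roll(sharp, shift=(dy, dx), axis=(0, 1)) for dy, dx in trajectory]
    return np.mean(frames, axis=0)

def reblur_loss(sharp, blurry, trajectory):
    """MSE between the observed blurry image and the reblurred estimate;
    minimizing this over the trajectory is the post-inference refinement
    idea from the abstract."""
    return float(np.mean((reblur(sharp, trajectory) - blurry) ** 2))
```

With a trivial one-pose trajectory the reblurred image equals the sharp one, so the loss against an identical "blurry" input is zero; a real refinement would optimize the trajectory parameters by gradient descent on this loss.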
Disentangling Neurodegeneration with Brain Age Gap Prediction Models: A Graph Signal Processing Perspective
Sihag, Saurabh, Mateos, Gonzalo, Ribeiro, Alejandro
Neurodegeneration, characterized by the progressive loss of neuronal structure or function, is commonly assessed in clinical practice through reductions in cortical thickness or brain volume, as visualized by structural MRI. While informative, these conventional approaches lack the statistical sophistication required to fully capture the spatially correlated and heterogeneous nature of neurodegeneration, which manifests both in healthy aging and in neurological disorders. To address these limitations, the brain age gap has emerged as a promising data-driven biomarker of brain health. Brain age gap prediction (BAGP) models estimate the difference between a person's brain age, predicted from neuroimaging data, and their chronological age. The resulting brain age gap serves as a compact biomarker of brain health, with recent studies demonstrating its predictive utility for disease progression and severity. However, practical adoption of BAGP models is hindered by their methodological opacity and limited generalizability across diverse clinical populations. This tutorial article provides an overview of BAGP and introduces a principled framework for this application based on recent advances in graph signal processing (GSP). In particular, we focus on graph neural networks (GNNs) and introduce the coVariance neural network (VNN), which leverages anatomical covariance matrices derived from structural MRI. VNNs offer strong theoretical grounding and operational interpretability, enabling robust brain age gap estimation. By integrating perspectives from GSP, machine learning, and network neuroscience, this work clarifies the path forward for reliable and interpretable BAGP models and outlines future research directions in personalized medicine.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.28)
- North America > United States > California (0.14)
- South America > Uruguay > Montevideo > Montevideo (0.04)
- (5 more...)
- Research Report > Experimental Study (0.67)
- Research Report > New Finding (0.65)
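The two quantities at the core of the BAGP abstract above can be sketched in a few lines: a polynomial coVariance filter (the building block of a VNN, with the anatomical covariance matrix as graph shift operator) and the brain age gap itself. This is a minimal illustration, assuming scalar filter taps and omitting the nonlinearities and training of a real VNN; the names are not from the paper's code.

```python
import numpy as np

def covariance_filter(C, x, taps):
    """Apply a coVariance filter H(C) x = sum_k h_k C^k x: a polynomial
    graph filter whose shift operator is the anatomical covariance
    matrix C. Stacking such filters with nonlinearities yields a VNN."""
    out = np.zeros_like(x)
    power = np.eye(C.shape[0])  # C^0
    for h_k in taps:
        out = out + h_k * (power @ x)
        power = power @ C  # advance to the next power of C
    return out

def brain_age_gap(predicted_age, chronological_age):
    """The BAG biomarker: predicted brain age minus chronological age."""
    return predicted_age - chronological_age
```

A tap vector of `[1.0]` reduces the filter to the identity, and `[0.0, 1.0]` to a single covariance shift `C @ x`, which makes the polynomial structure easy to verify.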
- South America > Uruguay > Montevideo > Montevideo (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
An AutoML Framework using AutoGluonTS for Forecasting Seasonal Extreme Temperatures
Rodríguez-Bocca, Pablo, Pereira, Guillermo, Kiedanski, Diego, Collazo, Soledad, Basterrech, Sebastián, Rubino, Gerardo
In recent years, great progress has been made in forecasting meteorological variables; in particular, deep learning architectures have recently achieved a major breakthrough in forecasting the daily average temperature over a ten-day horizon. However, forecasting events related to the maximum temperature over short horizons remains a challenge for the community, and an even more complex problem is predicting maximum daily temperatures in the short, medium, and long term. In this work, we focus on forecasting events related to the maximum daily temperature over medium-term periods (90 days). Accordingly, instead of addressing the problem from a meteorological point of view, this article tackles it from a climatological one. Due to the complexity of this problem, a common approach is to frame the study as a temporal classification problem with the classes: maximum temperature "above normal", "normal", or "below normal". From a practical point of view, we compiled a large historical dataset (1981 to 2018) from weather stations located in South America, and we also integrated exogenous information from the Pacific, Atlantic, and Indian Ocean basins. We applied the AutoGluonTS platform to this problem. This AutoML tool shows competitive forecasting performance with respect to large operational platforms dedicated to this climatological problem, but at a relatively low computational cost in terms of time and resources.
- Indian Ocean (0.25)
- North America > United States (0.14)
- South America > Brazil (0.14)
- (15 more...)
- Research Report > New Finding (0.66)
- Research Report > Promising Solution (0.66)
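The tercile framing described in the AutoML abstract above ("above normal" / "normal" / "below normal") can be sketched directly: the class boundaries are the climatological terciles of the historical maxima. This is a minimal illustration of the labeling step only, not of the AutoGluonTS forecasting pipeline; the function name is an assumption.

```python
import numpy as np

def tercile_labels(history, values):
    """Label maximum-temperature values relative to the climatological
    terciles of `history`: below the lower tercile -> 'below normal',
    above the upper tercile -> 'above normal', otherwise 'normal'."""
    lo, hi = np.percentile(history, [100 / 3, 200 / 3])
    labels = []
    for v in values:
        if v < lo:
            labels.append("below normal")
        elif v <= hi:
            labels.append("normal")
        else:
            labels.append("above normal")
    return labels
```

For example, against a uniform history of 1..30 °C, the values 5, 15, and 28 °C fall into the three classes in order.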
Mechanistic Interpretability with SAEs: Probing Religion, Violence, and Geography in Large Language Models
Simbeck, Katharina, Mahran, Mariam
Despite growing research on bias in large language models (LLMs), most work has focused on gender and race, with little attention to religious identity. This paper explores how religion is internally represented in LLMs and how it intersects with concepts of violence and geography. Using mechanistic interpretability and Sparse Autoencoders (SAEs) via the Neuronpedia API, we analyze latent feature activations across five models. We measure overlap between religion- and violence-related prompts and probe semantic patterns in activation contexts. While all five religions show comparable internal cohesion, Islam is more frequently linked to features associated with violent language. In contrast, geographic associations largely reflect real-world religious demographics, revealing how models embed both factual distributions and cultural stereotypes. These findings highlight the value of structural analysis in auditing not just outputs but also internal representations that shape model behavior.
- North America > United States > New York > New York County > New York City (0.28)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.14)
- (225 more...)
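The overlap measurement mentioned in the SAE abstract above can be illustrated with a simple set statistic. This sketch assumes Jaccard overlap between the sets of latent features activated by two prompt groups; the paper does not specify this exact metric, and the feature IDs here are arbitrary integers, not real Neuronpedia features.

```python
def feature_overlap(features_a, features_b):
    """Jaccard overlap |A ∩ B| / |A ∪ B| between the SAE latent features
    activated by two prompt groups (e.g. religion- vs violence-related
    prompts). A simplified stand-in for the paper's overlap analysis."""
    a, b = set(features_a), set(features_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

Two prompt groups sharing half of their union of features score 0.5; disjoint groups score 0.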
The Morgan-Pitman Test of Equality of Variances and its Application to Machine Learning Model Evaluation and Selection
Arratia, Argimiro, Cabaña, Alejandra, Mordecki, Ernesto, Rovira-Parra, Gerard
Model selection in non-linear models often prioritizes performance metrics over statistical tests, limiting the ability to account for sampling variability. We propose the use of a statistical test to assess the equality of variances in forecasting errors. The test builds upon the classic Morgan-Pitman approach, incorporating enhancements to ensure robustness against data with heavy-tailed distributions or high-variance outliers, together with a strategy to make residuals from machine learning models statistically independent. Through a series of simulations and real-world data applications, we demonstrate the test's effectiveness and practical utility, offering a reliable tool for model evaluation and selection in diverse contexts.
- North America > Canada > Ontario > Toronto (0.14)
- Europe > Spain (0.05)
- South America > Uruguay > Montevideo > Montevideo (0.04)
- Europe > Hungary > Budapest > Budapest (0.04)
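The classic Morgan-Pitman test that the abstract above builds on rests on one identity: for paired samples, Var(x) = Var(y) exactly when Corr(x + y, x − y) = 0, so equality of variances can be tested with the usual t statistic on that Pearson correlation. A minimal sketch of the classic test only; the paper's robustness enhancements and residual-decorrelation strategy are not reproduced here.

```python
import math

def morgan_pitman(x, y):
    """Classic Morgan-Pitman test of equal variances for paired samples:
    compute r = Corr(x + y, x - y) and the t statistic
    t = r * sqrt((n - 2) / (1 - r^2)), to compare against a
    Student-t(n-2) reference distribution."""
    n = len(x)
    s = [a + b for a, b in zip(x, y)]  # sums
    d = [a - b for a, b in zip(x, y)]  # differences
    ms, md = sum(s) / n, sum(d) / n
    cov = sum((a - ms) * (b - md) for a, b in zip(s, d))
    vs = sum((a - ms) ** 2 for a in s)
    vd = sum((b - md) ** 2 for b in d)
    r = cov / math.sqrt(vs * vd)
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return r, t
```

When the two samples are permutations of the same values (hence equal variances), r should be near zero and the test does not reject.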
A Case Study on the Effectiveness of LLMs in Verification with Proof Assistants
Bayazıt, Barış, Li, Yao, Si, Xujie
Large language models (LLMs) can potentially help with verification using proof assistants by automating proofs. However, it is unclear how effective LLMs are at this task. In this paper, we perform a case study based on two mature Rocq projects: the hs-to-coq tool and Verdi. We evaluate the effectiveness of LLMs in generating proofs through both quantitative and qualitative analysis. Our study finds that: (1) external dependencies and context in the same source file can significantly help proof generation; (2) LLMs perform well on small proofs but can also generate large ones; (3) LLMs perform differently across verification projects; and (4) LLMs can generate concise and clever proofs and apply classical techniques to new definitions, but can also make odd mistakes.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- (32 more...)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- Asia > China (0.04)
- South America > Uruguay > Montevideo > Montevideo (0.04)
- (12 more...)
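Finding (1) of the case study above — that external dependencies and same-file context help proof generation — comes down to how the prompt is assembled before it is sent to the LLM. A minimal sketch of such a prompt builder; the wording, comment markers, and parameter names are illustrative assumptions, not the authors' actual pipeline.

```python
def build_proof_prompt(theorem, context=None, dependencies=None):
    """Assemble an LLM prompt for Rocq proof generation. Optionally
    prepend external dependencies (e.g. lemma statements from other
    files) and same-file context, the two ingredients the case study
    found to significantly help proof generation."""
    parts = []
    if dependencies:
        parts.append("(* External dependencies *)\n" + "\n".join(dependencies))
    if context:
        parts.append("(* Same-file context *)\n" + context)
    parts.append("(* Complete the proof *)\n" + theorem + "\nProof.")
    return "\n\n".join(parts)
```

Ablating the `context` and `dependencies` arguments against a fixed model then isolates their effect on proof success, which is the shape of the quantitative analysis the abstract describes.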
Te Ahorré Un Click: A Revised Definition of Clickbait and Detection in Spanish News
Mordecki, Gabriel, Moncecchi, Guillermo, Couto, Javier
We revise the definition of clickbait, which lacks current consensus, and argue that the creation of a curiosity gap is the key concept that distinguishes clickbait from related phenomena such as sensationalism and headlines that do not deliver what they promise or that diverge from the article. We therefore propose a new definition: clickbait is a technique for generating headlines and teasers that deliberately omit part of the information with the goal of raising the readers' curiosity, capturing their attention, and enticing them to click. We introduce a new approach to creating clickbait detection datasets by refining the concept's boundaries and the annotation criteria, minimizing subjectivity in the decision as much as possible. Following this approach, we create and release TA1C (for Te Ahorré Un Click, Spanish for Saved You A Click), the first open-source dataset for clickbait detection in Spanish. It consists of 3,500 tweets from 18 well-known media sources, manually annotated with a Fleiss' κ inter-annotator agreement of 0.825. We implement strong baselines that achieve an F1-score of 0.84.
- South America > Peru (0.14)
- South America > Argentina (0.04)
- Oceania > Palau (0.04)
- (8 more...)
- Media > News (1.00)
- Marketing (1.00)
- Leisure & Entertainment > Sports (1.00)
- (2 more...)
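The agreement figure reported for the TA1C dataset above (Fleiss' κ of 0.825) is a standard chance-corrected statistic that can be computed from per-item category counts. A self-contained sketch of the textbook formula, not of the authors' annotation tooling.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-annotator agreement. `ratings` is a list
    of per-item category-count lists (same number of raters per item),
    e.g. [[3, 0], [0, 3]] = 2 items, 3 raters, 2 categories."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    # Proportion of all assignments that went to each category
    totals = [sum(item[j] for item in ratings) for j in range(n_cats)]
    p_cat = [t / (n_items * n_raters) for t in totals]
    # Mean per-item observed agreement
    p_bar = sum(
        (sum(c * c for c in item) - n_raters) / (n_raters * (n_raters - 1))
        for item in ratings
    ) / n_items
    # Expected agreement by chance
    p_e = sum(p * p for p in p_cat)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement yields κ = 1; κ well above 0.8, as reported for TA1C, is conventionally read as almost-perfect agreement.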